Assignment 3: Neural Volume Rendering and Neural Surface Rendering
...

Course - 16-825: Learning for 3D Vision
Name - Parth Nilesh Shah
Andrew ID - pnshah


A. Neural Volume Rendering
...

0. Transmittance Calculation
...

transmittance.png
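For reference, transmittance along a ray with piecewise-constant density can be computed as below (a minimal sketch; the function name and segment convention are illustrative, not the starter code's):

```python
import math

def transmittance(sigmas, deltas):
    """Transmittance at the start of each segment of a ray, given per-segment
    densities sigma_i and segment lengths delta_i:
        T_i = exp(-sum_{j < i} sigma_j * delta_j).
    Also returns the transmittance after the final segment."""
    T, out = 1.0, []
    for sigma, delta in zip(sigmas, deltas):
        out.append(T)
        T *= math.exp(-sigma * delta)  # attenuate through this segment
    out.append(T)
    return out
```

For example, a single segment of density 1 and length 1 attenuates the ray to exp(-1) ≈ 0.368.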

1. Differentiable Volume Rendering
...

1.3 Ray Sampling
...

Grid: xy_grid_vis_0.png
Rays: ray_vis_0.png
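The ray generation can be sketched as follows for a pinhole camera (simplified: identity rotation, camera at the origin; the actual implementation unprojects the pixel grid through the dataset's camera-to-world transform):

```python
import math

def get_rays(W, H, focal):
    """One ray per pixel for a pinhole camera at the origin looking down +z.
    Simplified sketch: a real camera would rotate the directions into world
    space and use the camera center as the origin."""
    origins, directions = [], []
    for j in range(H):
        for i in range(W):
            # pixel center -> camera-plane coordinates
            x = (i + 0.5 - W / 2) / focal
            y = (j + 0.5 - H / 2) / focal
            norm = math.sqrt(x * x + y * y + 1.0)
            origins.append((0.0, 0.0, 0.0))
            directions.append((x / norm, y / norm, 1.0 / norm))
    return origins, directions
```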

1.4 Point Sampling
...

Sample Points
sample_points_0.png
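Point sampling splits [near, far] into equal bins and draws one depth per bin, then places a 3D point at each depth along the ray; a sketch (names illustrative):

```python
import random

def stratified_depths(near, far, n_pts, jitter=True):
    """Stratified sampling along a ray: one depth per bin of [near, far].
    With jitter=False the bin midpoints are returned (deterministic)."""
    bin_size = (far - near) / n_pts
    return [near + (k + (random.random() if jitter else 0.5)) * bin_size
            for k in range(n_pts)]

def ray_points(origin, direction, depths):
    """3D sample points: origin + depth * direction for each depth."""
    return [tuple(o + t * d for o, d in zip(origin, direction))
            for t in depths]
```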

1.5 Volume Rendering
...

Spiral Render: part_1.gif
Depth: depth_0.png
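The renderer composites per-sample colors with emission-absorption weights; a single-ray sketch in pure Python (illustrative names, not the starter code):

```python
import math

def volume_render(sigmas, deltas, colors, depths):
    """Composite one ray: weight_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated over earlier samples.
    Returns the rendered RGB and the expected (weight-averaged) depth."""
    T, depth = 1.0, 0.0
    rgb = [0.0, 0.0, 0.0]
    for sigma, delta, color, z in zip(sigmas, deltas, colors, depths):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        w = T * alpha
        for k in range(3):
            rgb[k] += w * color[k]
        depth += w * z
        T *= 1.0 - alpha  # light remaining after this segment
    return rgb, depth
```

An opaque first sample receives (nearly) all the weight, so the ray returns that sample's color and depth.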

2. Optimizing a basic implicit volume
...

Spiral Render
part_2.gif

Box Center - (0.25, 0.25, 0.00)
Box Side Length - (2.00, 1.50, 1.50)

3. Optimizing a NeRF
...

Note: No View Dependence

Epoch 10: part_3_nerf_10.gif
Epoch 50: part_3_nerf_50.gif
Epoch 100: part_3_nerf_100.gif
Epoch 150: part_3_nerf_150.gif
Epoch 240: part_3_nerf_240.gif
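The NeRF MLP consumes harmonically embedded positions rather than raw xyz; a sketch of the embedding (the frequency convention varies between implementations, so this is one common choice, not necessarily the starter code's):

```python
import math

def harmonic_embedding(coords, n_freqs):
    """Map each coordinate x to (sin(2^k x), cos(2^k x)) for k = 0..n_freqs-1,
    letting the MLP represent high-frequency detail that raw xyz inputs miss."""
    out = []
    for x in coords:
        for k in range(n_freqs):
            f = 2.0 ** k
            out.append(math.sin(f * x))
            out.append(math.cos(f * x))
    return out
```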

4. NeRF Extras
...

4.1 View Dependence
...

Lego: part_3_240.gif
Materials: part_4_materials_240.gif

Trade-Offs
Incorporating view dependence lets NeRF produce more realistic renders, capturing effects such as specular highlights and subtle shading variations. The drawback is an increased risk of overfitting to the training views: the model may tie colors and effects to specific viewpoints instead of generalizing across the object's structure. As a result, renderings from novel views can lose fidelity, since the network has never seen those directions.
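Structurally, what keeps this trade-off manageable is conditioning only the color head on the view direction, after density has been predicted from position alone. A minimal sketch with a hypothetical one-layer color head (all weights are illustrative placeholders, not the actual MLP):

```python
import math

def view_dependent_color(features, view_dir, w_feat, w_view, bias):
    """Single linear layer over [position features ; view direction] followed
    by a sigmoid, standing in for the small color MLP. Because density is
    predicted before view_dir is injected, geometry stays view-independent."""
    pre = bias
    for f, w in zip(features, w_feat):
        pre += f * w
    for d, w in zip(view_dir, w_view):
        pre += d * w
    return 1.0 / (1.0 + math.exp(-pre))  # sigmoid keeps the channel in (0, 1)
```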

B. Neural Surface Rendering
...

5.1 Sphere Tracing
...

Algorithm:

  1. Initialize the points for all rays at the ray origins.
  2. Iterate (for max_iters):
    1. Query the SDF to find the distance to the nearest surface at each point.
    2. March each point along its ray direction by that distance (the SDF value).
  3. Threshold the final SDF values at the updated points to obtain the hit mask.

Code Implementation:


    def sphere_tracing(
        self,
        implicit_fn,
        origins,     # (N, 3) ray origins
        directions,  # (N, 3) unit ray directions
    ):
        # Start every ray at its origin.
        points = origins
        for _ in range(self.max_iters):
            # The SDF value is the distance to the nearest surface, so each
            # point can safely march that far along its ray without crossing it.
            distance = implicit_fn(points)
            points = points + directions * distance

        # Rays whose final SDF value falls below the threshold hit a surface.
        mask = implicit_fn(points) < 1
        return points, mask
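As a sanity check, the same loop can be run on a single ray against an analytic sphere SDF (standalone sketch; the epsilon is illustrative):

```python
import math

def sphere_trace_one(sdf, origin, direction, max_iters=64, eps=1e-5):
    """Single-ray version of the method above: march by the SDF value each
    step, then threshold the final distance to decide whether the ray hit."""
    p = list(origin)
    for _ in range(max_iters):
        dist = sdf(p)
        p = [c + d * dist for c, d in zip(p, direction)]
    return p, sdf(p) < eps

# Unit sphere at the origin; a ray from z = -3 toward +z should hit (0, 0, -1).
unit_sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
```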
Visualization
part_5.gif

6. Optimizing a Neural SDF
...

Input: part_6_input.gif
Prediction: part_6.gif
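Training a neural SDF typically pairs the point-cloud data term with an eikonal penalty that pushes the gradient norm toward 1 everywhere; a sketch using finite differences (the real implementation would use autograd; names illustrative):

```python
import math

def eikonal_penalty(sdf, points, h=1e-4):
    """Mean squared deviation of ||grad sdf|| from 1 over sample points,
    with central finite differences standing in for autograd gradients."""
    total = 0.0
    for p in points:
        grad = []
        for k in range(3):
            hi, lo = list(p), list(p)
            hi[k] += h
            lo[k] -= h
            grad.append((sdf(hi) - sdf(lo)) / (2.0 * h))
        norm = math.sqrt(sum(g * g for g in grad))
        total += (norm - 1.0) ** 2
    return total / len(points)
```

A true SDF (e.g. the analytic sphere distance) incurs near-zero penalty, while scaled or degenerate fields are penalized.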

7. VolSDF
...

Best Results (alpha = 10, beta = 0.05)

Render: part_7.gif
Geometry: part_7_geometry.gif
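In VolSDF, alpha and beta enter through the SDF-to-density conversion sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta; a sketch:

```python
import math

def sdf_to_density(sdf_value, alpha=10.0, beta=0.05):
    """VolSDF density: sigma = alpha * Psi_beta(-d).
    alpha sets the density magnitude inside the object; beta controls how
    sharply the density falls off across the surface (small beta = sharp)."""
    x = -sdf_value
    if x <= 0.0:  # outside the surface (d >= 0)
        cdf = 0.5 * math.exp(x / beta)
    else:         # inside the surface (d < 0)
        cdf = 1.0 - 0.5 * math.exp(-x / beta)
    return alpha * cdf
```

At the surface the density equals alpha / 2; it approaches alpha deep inside and 0 far outside, so raising beta widens the band over which this transition happens.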

Hyperparameter Tuning

Changing alpha (beta = 0.05):

alpha = 50
Render: part_7_a50.gif
Geometry: part_7_geometry_a50.gif
alpha = 10
Render: part_7.gif
Geometry: part_7_geometry.gif
alpha = 1
Render: part_7_a1.gif
Geometry: part_7_geometry_a1.gif

Changing beta (alpha = 10):

beta = 0.01
Render: part_7_a50.gif
Geometry: part_7_geometry_a50.gif
beta = 0.05
Render: part_7.gif
Geometry: part_7_geometry.gif
beta = 0.5
Render: part_7_b_5.gif
Geometry: part_7_geometry-beta0.5.gif

Analysis:

  1. How does high beta bias your learned SDF? What about low beta?
    High beta -> the density falls off gradually across the zero level set, biasing the SDF toward a smoother, more diffuse surface.
    Low beta -> the density concentrates tightly at the surface, yielding a sharper reconstruction but with jagged edges and noise.

  2. Which beta value is better for training with volume rendering?
    High beta. A higher beta spreads density over a broader band around the surface, so more samples overlap the true surface and receive gradient signal, which stabilizes optimization and helps convergence. A lower beta produces a near-step density with sparse, unstable gradients, making training difficult and prone to overfitting.

  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?
    Low beta. It concentrates density at the zero level set, so the learned surface captures sharp edges and fine detail, making it better for a very accurate surface once training is stable.

8. Neural Surfaces Extra
...

8.2 Fewer Training Views

20 views
VolSDF: part_7_n20.gif
NeRF: part_8_n20.gif